23 November, 2024

In this presentation, you will learn:

  1. About R and RStudio
  2. Installing R and RStudio to your computer
  3. Importing dataset into R
  4. Meta-analysis in R
    • Introduction to meta-analysis packages
    • Installing and loading packages
    • Performing a meta-analysis
    • Visualizing risk of bias assessments
    • Performing and visualizing subgroup analyses
    • Performing leave-one-out sensitivity analysis
    • Performing publication bias assessments

Directions:

  • Press F to enable full screen
  • Press W to toggle widescreen mode
  • Press O to enable overview mode
  • Press H to enable code highlight mode
  • Press P to show presenter notes
  • Press Esc to exit all of the aforementioned modes

Disclaimer:

All datasets and materials used in this presentation are adapted from:

  1. Harrer M, Cuijpers P, Furukawa TA, Ebert DD. Doing Meta-Analysis in R: A hands-on guide. 2019. doi: 10.5281/zenodo.2551803
  2. Schwarzer G, Carpenter JR, Rücker G. Meta-analysis with R. Switzerland: Springer International Publishing; 2015. link
  3. McGuinness LA. Introduction to robvis, a visualization tool for risk-of-bias assessments. 2019 Nov 18 [cited 2021 Jan 21]. Available from: https://cran.r-project.org/web/packages/robvis/vignettes/Introduction_to_robvis.html


Example publications:

  • Lazarus G, Audrey J, Wangsaputra VK, Tamara A, Safitri ED, Tahapary DL. High admission blood glucose independently predicts poor prognosis in COVID-19 patients: A systematic review and dose-response meta-analysis. Diabetes Res Clin Pract. 2021 Jan 1;171:108561. doi: 10.1016/j.diabres.2020.108561

Getting started with R

About R and RStudio

What is R?

  • A free, open-source programming language and software environment for statistical computing and graphics.


What is RStudio?

  • A third-party integrated development environment (IDE) that eases the use of R through a user-friendly graphical interface.

Why R?

  • Free and Open-source
  • Large community of users, providing free online books for learning R and a wide range of statistical utilities
  • Cutting-edge technology
  • Robust visualization library with the ggplot2 package
  • Gateway to a lucrative career
  • Easily create data analysis reports as documents (e.g., Word, PDF) and presentations for reproducibility with R Markdown
  • Many specialized R packages that are not available in other software


Installing R and RStudio

Installing R

  1. Go to https://cran.r-project.org/

  2. Click ‘Download R for …’ [Choose according to your operating system]
  3. Follow the installation guide
  4. This is the interface of the installed software


Installing RStudio

  1. Follow the installation guide
  2. This is the interface of the installed software


Tips & Tricks in Using R

  • R is case-sensitive
  • The RStudio console features tab-code-completion and live help for function coding. Type libr into the RStudio console, wait a second, and when a window with library() appears, hit the Tab key.
  • Pressing the Up and Down arrow keys in the console cycles through the history of previously entered R code.
  • A substantial amount of time is typically spent debugging and troubleshooting R code. Statistical forums such as StackOverflow, CrossValidated, and Posit Community help a lot.
  • Common utility in R
    • # at the beginning of a line to write a comment
    • ?package/function_name to open the help file of a package/function
    • citation("package_name") to cite a package
    • install.packages('package_name') and library(package_name) to install and load a package, respectively. Note that install.packages() only needs to be run once, while library() needs to be run in every new session.
    • <- or = to assign a value to a variable
    • '' or "" to include a string/character
    • c() to combine values into a vector or list

  • To explore a dataset:

    • View() to open a spreadsheet-style view of a dataset.
    • summary() to provide the min, max, mean, median, and first and third quartiles.
    • head() and tail() to view a specified number of rows at the beginning and the end of a dataset, respectively.
    • str() to examine the structure of a dataset.
  • It is preferable to write R code in a script file rather than directly in the console. In the RStudio window, click File > New File > R Script or simultaneously press CTRL+Shift+N.

  • Common types of object classes in R:

    • character : a string (text) object, or a vector of such strings [always enclose with '' or ""]
    • logical : a vector containing logical values (TRUE/FALSE)
    • integer : a vector containing integer values
    • double : a vector containing real (decimal) values
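A minimal sketch of these classes at the console (the variable names are made up for this example):

```r
# Each class from the list above, checked with class()/typeof()
x_chr <- "hello"          # character: always enclosed in '' or ""
x_log <- c(TRUE, FALSE)   # logical vector
x_int <- c(1L, 2L, 3L)    # integer vector (note the 'L' suffix)
x_dbl <- c(1.5, 2.7)      # double (real-valued) vector

class(x_chr)   # "character"
class(x_int)   # "integer"
typeof(x_dbl)  # "double" (class() reports "numeric")
```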

Importing dataset into R

Importing dataset from Excel

Loading datasets created by other software requires different commands. There are three ways to import Excel datasets into R.

  1. Using ‘read_excel’ function

    # Install the required package: 'readxl' for Excel and 'haven' for SAS, SPSS,
    # and STATA
    install.packages("readxl", dependencies = TRUE)
    
    # Load the package
    library(readxl)
    
    # Make sure that the file name path is correct.
    databin <- read_excel("data/Meta-Analysis_Data.xlsx")
    
    # To view the data (note the capital 'V' in View())
    View(databin)
    
    # For Excel files containing multiple sheets and tables, use
    databin <- read_excel("file_name.xlsx", sheet = "sheet_name", range = "A1:D10")

    Reference: Harrer M, Cuijpers P, Furukawa TA, Ebert DD. Doing Meta-Analysis in R: A hands-on guide. 2019. doi: 10.5281/zenodo.2551803.

  2. Using ‘Import Dataset’ menu
    1. Click Import Dataset > From Excel
    2. Click Browse > Set the name of the variable at the Name column > Adjust the Sheet and Row > Click Import


  3. Importing files directly from the working directory
    1. Click the respective file from the Working Directory > Click Import Dataset
    2. Set the name of the variable at the Name column > Adjust the Sheet and Row > Click Import

If loaded correctly, the table below will show up in the Source window:

Author | TE | seTE | rob | control | intervention duration | intervention type | population | type of students
Call et al. | 0.7091362 | 0.2608202 | low | WLC | short | mindfulness | undergraduate students | psychology
Cavanagh et al. | 0.3548641 | 0.1963624 | low | WLC | short | mindfulness | students | general
DanitzOrsillo | 1.7911700 | 0.3455692 | high | WLC | short | ACT | undergraduate students | general
de Vibe et al. | 0.1824552 | 0.1177874 | low | no intervention | short | mindfulness | undergraduate students | general
Frazier et al. | 0.4218509 | 0.1448128 | low | information only | short | PCI | students | psychology
Frogeli et al. | 0.6300000 | 0.1960000 | low | no intervention | short | ACT | undergraduate students | nursing students

Importing dataset from other apps

  1. CSV files

databin <- read.csv(file, header = TRUE, sep = ",", dec = ".", fill = TRUE)
View(databin)

  2. SAS, SPSS, and STATA files

# Importing SAS files
install.packages("haven")
library(haven)
data_file <- read_sas("data_file.sas7bdat")
View(data_file)

# Importing SPSS files
library(haven)
data_file <- read_spss("data_file.sav")
View(data_file)

# Importing STATA files
library(haven)
data_file <- read_stata("data_file.dta")
View(data_file)
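Since no SAS/SPSS/STATA files ship with this tutorial, here is a hedged, self-contained sketch of the CSV route: it writes a tiny throwaway CSV to a temporary path and reads it back with read.csv(), as above (the column names are invented for this example):

```r
# Create a throwaway CSV file for demonstration purposes
tmp <- tempfile(fileext = ".csv")
write.csv(data.frame(Author = c("A", "B"), TE = c(0.5, 0.7)),
          tmp, row.names = FALSE)

# Read it back with read.csv()
databin <- read.csv(tmp, header = TRUE, sep = ",", dec = ".")
nrow(databin)    # 2
names(databin)   # "Author" "TE"
```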

Introduction to R environment

Basic commands

Starting commands:
  1. install.packages("") : Install packages to R library
  2. devtools::install_github("") : Install GitHub packages to R library
  3. library() : Load packages into R session
  4. vignette() : View package vignettes
  5. help() / ? : Package/command documentation
  6. class() : Inspecting object classes
Inspecting datasets:
  1. head() : to return the first N (default 6) rows of the dataset
  2. tail() : to return the last N (default 6) rows of the dataset
  3. View() : to view the dataset in Source window
  4. $ : selecting a column in a dataset (e.g., dataset$col1)
Creating and modifying variables
  1. Vector : vec1 <- c("A", "B", "C", "D")
  2. Data frame : df1 <- data.frame("names" = c("A", "B", "C", "D"))
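A minimal sketch of these commands together (the values are made up for this example):

```r
# Build a character vector and wrap it in a data frame
vec1 <- c("A", "B", "C", "D")
df1  <- data.frame(names = vec1, score = c(1, 2, 3, 4))

head(df1, 2)   # first two rows
df1$score      # select the 'score' column with $
class(df1)     # "data.frame"
```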

Operators in R

Arithmetic operators:

Operator Description
+ addition
- subtraction
* multiplication
/ division
^ or ** exponentiation

Logical operators:

Operator Description
> greater than
>= greater than or equal to
< smaller than
<= smaller than or equal to
== exactly equal to
!= not equal to
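These operators can be tried directly at the console; a few examples:

```r
# Arithmetic operators
2 + 3    # 5
2^3      # 8 (same as 2**3)
5 / 2    # 2.5

# Logical (comparison) operators return TRUE/FALSE
5 >= 5       # TRUE
5 != 4       # TRUE
"a" == "A"   # FALSE (remember: R is case-sensitive)
```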

Basic visualizations with R

Plotting a simple graph

data(mtcars)
plot(mtcars$mpg, mtcars$hp)

Styling a simple graph

data(mtcars)
plot(mtcars$mpg, mtcars$hp, pch = 16, col = "black", xlab = "Miles/(US) gallon",
    ylab = "Gross horsepower")
abline(lm(mtcars$hp ~ mtcars$mpg))

Using ggplot2 to style the graph

# ggplot2 must be loaded first (it is part of the tidyverse)
library(ggplot2)

data(mtcars)
ggplot(data = mtcars, aes(x = mpg, y = hp)) +
  geom_point() +
  geom_smooth(method = 'lm') +
  xlab("Miles/(US) gallon") +
  ylab("Gross horsepower") +
  theme_classic()

Meta-analysis in R

Packages you will need for this tutorial:

  1. meta package
    A direct, intuitive package for general meta-analysis
  2. metafor package
    Similar to meta, but with a more comprehensive utility
  3. tidyverse package
    Contains dplyr and ggplot2 packages which will come in handy for manipulating tables and creating graphs

Other meta-analysis packages in R :

  1. dosresmeta package : Dose-response meta-analysis
  2. netmeta / gemtc package : Network meta-analysis
  3. mada package : Meta-analysis of diagnostic test accuracy (DTA)
  4. ipdmeta package : Individual patient data meta-analysis
  5. dmetar package : Additional analyses (e.g., p-hacking, power analysis, GOSH plot, etc.)
    Reference: Doing Meta-Analysis in R by Matthias Harrer

meta vs metafor package

meta

  (+) Direct and straightforward implementation with minimal lines of code
  (+) Requires a single-step workflow to perform a meta-analysis
  (+) Includes most general meta-analysis utilities (incl. meta-regression, interaction terms, modifiers, and Empirical Bayes estimators)
  (+) Allows quantitative assessment of publication bias (e.g., Egger, Begg) using the metabias function
  (+) Two standout functions: metagen and bubble.metareg
  (-) Each data type requires a different function (e.g., metabin, metacont, metaprop, metagen)

metafor

  (+) A more comprehensive meta-analysis package
  (+) Includes most meta functions in addition to enhanced model-fitting capacities (e.g., log-likelihood estimates, AIC, BIC)
  (+) Allows trim-and-fill analysis with the trimfill function, in addition to quantitative assessments of publication bias
  (+) All meta-analysis models are built with the rma.uni function
  (+) Standout functions: model-fitting functions
  (-) Requires an additional step (two-step workflow) to calculate effect sizes a priori with the escalc function
Bottom-line message:
  • meta package for general meta-analysis methods
  • metafor package for model-fitting
  • You can opt to load both the meta and metafor packages to the environment.

Read the full documentation at: Lortie CJ, Filazzola A. A contrast of meta and metafor packages for meta-analyses in R. Ecol Evol. 2020;10:10916-21.

Breaking down the meta package

  1. Meta-analysis functions
    • metabin : Meta-analysis of binary outcome data
    • metacont : Meta-analysis of continuous outcome data
    • metagen : Generic inverse variance meta-analysis (pre-computed effect sizes)
    • metainc : Meta-analysis of incidence rates
    • metaprop : Meta-analysis of single proportions
    • metamean : Meta-analysis of single means
    • metarate : Meta-analysis of single incidence rates
    • metacor : Meta-analysis of single correlations
  2. Meta-analysis visualization functions
    • forest.meta, forest.metabind : Forest plot
    • funnel.meta : Funnel plot
    • bubble.metareg : Bubble plot to visualize the result of a meta-regression
    • baujat.meta : Baujat plot to explore heterogeneity in meta-analysis

  3. Additional functions
    • metareg : Meta-regression
    • metainf : Leave-one-out sensitivity analysis
    • metacum : Cumulative meta-analysis
    • metabias.meta : Statistical tests for funnel plot asymmetry
    • trimfill.meta : Trim-and-fill analysis to adjust for publication bias
  4. Estimators of random-effects meta-analysis
    1. Iterative/Non-iterative
      • Iterative : Converge to an estimate when a specific criterion is fulfilled
      • Non-iterative (Closed-form) : Estimates parameter in a predetermined number of steps
    2. Positive/Non-negative
      • Positive : the τ² estimate is always > 0
      • Non-negative : the τ² estimate is ≥ 0 (may equal zero)

The following estimators are available in the meta package. You can set which estimator to use with the method.tau argument.

  • DL (DerSimonian-Laird) : moments estimator; non-iterative; non-negative. The most commonly used estimator, and appears to be adequately sufficient.
  • PM (Paule-Mandel) : moments estimator; iterative; non-negative. Yields a more favorable profile for estimating the between-study variance than DL.
  • REML (Restricted Maximum-Likelihood) : maximum likelihood estimator; iterative; non-negative. Preferable for random-effects meta-analysis of continuous outcomes.
  • ML (Maximum-Likelihood) : maximum likelihood estimator; iterative; non-negative. Another alternative to REML.
  • HS (Hunter-Schmidt) : moments estimator; non-iterative; non-negative. Negatively biased; returns a smaller mean squared error than DL, REML, and HE. Should be avoided if unbiasedness is considered important.
  • SJ (Sidik-Jonkman) : model error variance estimator; non-iterative; positive. Yields a lower error rate than DL. May be an alternative to DL if adjusted by the Hartung-Knapp adjustment.
  • HE (Hedges and Olkin) : moments estimator; non-iterative; non-negative. Returns larger estimates than other estimators; performs well in the presence of substantial between-study heterogeneity but comes with a large mean squared error. Preferable when the number of studies is large (i.e., k ≥ 30), as it has smaller bias for large k.
  • EB (Empirical Bayes) : Bayes estimator; iterative; non-negative. Preferable in situations where appropriate prior information is available.

Which estimator to choose?

  • Limited evidence on the comparison between estimators under the same simulation settings.
  • No direct recommendation for the best estimator, especially when k < 5
  • DerSimonian-Laird (DL) is the most extensively used estimator (and also performs adequately)
  • Use the Hartung-Knapp adjustment (hakn = TRUE) and prediction interval (prediction = TRUE) whenever possible

Bottom-line message:

  • DL for general meta-analysis + HAKN + 95% PI
  • Paule-Mandel (PM) may be an alternative to DL
  • REML or ML for continuous outcomes
  • Bayesian (EB) estimator when prior information is adequately available and appropriate

For a more detailed guidance on the random-effects model selection: Veroniki AA, Jackson D, Viechtbauer W, Bender R, Bowden J, Knapp G, et al. Methods to estimate the between‐study variance and its uncertainty in meta-analysis. Res Synth Methods. 2016 Mar;7(1):55-79. doi: 10.1002/jrsm.1164

Installing and loading packages

We will install the meta, metafor, and tidyverse packages. The tidyverse package is a bundle of visualization and data-frame tools, including:

  • ggplot2 for data visualization
  • dplyr for data manipulation
  • tidyr for data tidying
  • readr for data import
  • purrr for functional programming
  • tibble for a modern re-imagining of data frames
  • stringr for strings, and
  • forcats for factors

# Installing the required packages
install.packages("meta", dependencies = TRUE)
install.packages("metafor", dependencies = TRUE)
install.packages("tidyverse", dependencies = TRUE)

# Loading the packages
library(meta)
library(metafor)
library(tidyverse)

Meta-analysis

madata <- metagen(TE = TE,
            seTE = seTE,
            data = databin,
            studlab = paste(Author),
            common = TRUE,
            random = TRUE,
            method.tau = "DL",
            hakn = FALSE,
            prediction = TRUE,
            sm = "RR")

# method.tau : "DL", "PM", "REML", "ML", "HS", "SJ", "HE", or "EB"
# sm : "RD", "RR", "OR", "ASD", "HR", "MD", "SMD", or "ROM"
# level.ci : Specifies the width of confidence interval (0.90, 0.95, 0.99)
# level.predict : Specifies the width of prediction interval (0.90, 0.95, 0.99)
# label.e, label.c : Labels for experimental and control group
# n.e, n.c : Number of observations in the experimental and control groups
# label.left, label.right : Graph labels on the left and right side of forest plot
# title : Title of meta-analysis

summary(madata)

To visualize the meta-analysis, we will use the forest() function

# To print the forest plot
forest(madata)

We can also style the forest plot to match Review Manager’s layout

# To print the forest plot
forest(madata, layout = "RevMan5", 
       col.square = "darkblue", col.square.lines = "darkblue")

Next, to change the x-axis tick marks

# To print the forest plot
forest(madata, layout = "RevMan5", 
       col.square = "darkblue", col.square.lines = "darkblue",
       at = c(0.1, 0.2, 0.5, 1, 2, 5, 10))

To save the plot:

  1. Using drop-down menu in the Plots window
    1. Click Export > Save as Image
    2. Set the File name, Width, and Height > Click Update Preview
    3. Click Save
  2. Using R code (the width and height values below are arbitrary examples)
    • Save as a PNG file :
      png("filename.png", width = 800, height = 600, units = "px")
    • Save as a JPG file :
      jpeg("filename.jpg", width = 800, height = 600, units = "px")
    • Save as a TIFF file :
      tiff("filename.tiff", width = 800, height = 600, units = "px")
    • Save as a BMP file :
      bmp("filename.bmp", width = 800, height = 600, units = "px")
    • After opening a device, re-run the plotting code, then call dev.off() to write and close the file
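Putting it together, a hedged sketch of the full save cycle using base graphics (the temporary file path, dimensions, and plot are arbitrary examples); note that dev.off() is what actually writes and closes the file:

```r
# Open a PNG device at a throwaway path (example values only)
out <- tempfile(fileext = ".png")
png(out, width = 800, height = 600, units = "px")

# Re-run the plotting code while the device is open
plot(mtcars$mpg, mtcars$hp)

# Close the device; this writes the file to disk
dev.off()
file.exists(out)
```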

Sensitivity analysis

  • To assess the robustness of the meta-analysis model
  • In R, use the metainf function
# We will use our previous meta-analysis model 'madata'
sens.analysis <- metainf(madata)
forest(sens.analysis)

As seen in the figure above, we can estimate the effect size, 95% confidence interval, and global heterogeneity (I²) of each leave-one-out model.

Subgroup analysis

  • To investigate heterogeneity or clinical effects between different groups

  • In R, use the subgroup argument in the meta-analysis function to produce a subgroup analysis table

  • To visualize all subgroup analyses, use the metabind function.

    # First, we will examine the structure of the data 'databin'
    str(databin)
    ## tibble [18 × 10] (S3: tbl_df/tbl/data.frame)
    ##  $ Author               : chr [1:18] "Call et al." "Cavanagh et al." "DanitzOrsillo" "de Vibe et al." ...
    ##  $ TE                   : num [1:18] 0.709 0.355 1.791 0.182 0.422 ...
    ##  $ seTE                 : num [1:18] 0.261 0.196 0.346 0.118 0.145 ...
    ##  $ rob                  : chr [1:18] "low" "low" "high" "low" ...
    ##  $ control              : chr [1:18] "WLC" "WLC" "WLC" "no intervention" ...
    ##  $ intervention duration: chr [1:18] "short" "short" "short" "short" ...
    ##  $ intervention type    : chr [1:18] "mindfulness" "mindfulness" "ACT" "mindfulness" ...
    ##  $ population           : chr [1:18] "undergraduate students" "students" "undergraduate students" "undergraduate students" ...
    ##  $ type of students     : chr [1:18] "psychology" "general" "general" "general" ...
    ##  $ prevention type      : chr [1:18] "selective" "universal" "universal" "universal" ...

We will try to visualize the results of subgroup analyses based on ‘rob’, ‘control’, ‘population’, and ‘gender’

   ## Subgroup analysis based on 'rob'
   madata_rob <- metagen(TE = TE,
            seTE = seTE,
            data = databin,
            studlab = paste(Author),
            common = TRUE,
            random = FALSE,
            method.tau = "DL",
            hakn = FALSE,
            prediction = FALSE,
            sm = "RR", subgroup = rob)
            
     # or
     madata_rob <- update(madata, subgroup = rob, bylab = "Risk of bias")

Do the same for control, population, and gender variables

   ## Subgroup analysis based on 'Control'
    madata_cont <- metagen(TE = TE,
            seTE = seTE,
            data = databin,
            studlab = paste(Author),
            common = TRUE,
            random = FALSE,
            method.tau = "DL",
            hakn = FALSE,
            prediction = FALSE,
            sm = "RR", subgroup = control)

    # or 
    madata_cont <- update(madata, subgroup = control, bylab = "Control group")

    ## Subgroup analysis based on 'type of students'
    madata_pop <- metagen(TE = TE,
            seTE = seTE,
            data = databin,
            studlab = paste(Author),
            common = TRUE,
            random = FALSE,
            method.tau = "DL",
            hakn = FALSE,
            prediction = FALSE,
            sm = "RR", subgroup = population)
    
    # or 
    madata_pop <- update(madata, subgroup = population, bylab="Population")

   ## Subgroup analysis based on 'gender'
   madata_gender <- metagen(TE = TE,
            seTE = seTE,
            data = databin,
            studlab = paste(Author),
            common = TRUE,
            random = FALSE,
            method.tau = "DL",
            hakn = FALSE,
            prediction = FALSE,
            sm = "RR", subgroup = gender)

     # or
     madata_gender <- update(madata, subgroup = gender, bylab="Gender")

Lastly, we visualize all the analyses using the metabind function

# Combining and visualizing the subgroup analyses
madata_subgroup <- metabind(madata_rob, madata_cont, madata_pop, madata_gender)
forest(madata_subgroup, xlim = c(0.5, 5))

Performing meta-regression

To perform meta-regression, we will call data(Fleiss93) from the package meta.

data(Fleiss93)
str(Fleiss93)
## 'data.frame':    7 obs. of  6 variables:
##  $ study  : chr  "MRC-1" "CDP" "MRC-2" "GASP" ...
##  $ year   : int  1974 1976 1979 1979 1980 1980 1988
##  $ event.e: int  49 44 102 32 85 246 1570
##  $ n.e    : int  615 758 832 317 810 2267 8587
##  $ event.c: int  67 64 126 38 52 219 1720
##  $ n.c    : int  624 771 850 309 406 2257 8600

ma.aspimi <- metabin(event.e = event.e, n.e = n.e, event.c = event.c, n.c = n.c,
    data = Fleiss93, studlab = study)

forest(ma.aspimi)

mareg.aspimi <- metareg(ma.aspimi, ~year)
summary(mareg.aspimi)

bubble(mareg.aspimi, studlab = TRUE)
legend("topright", "B=0.01 (95%CI -0.02, 0.04), p=0.380")

Publication bias assessments

  • To investigate small-study effects and potential publication biases
  • Should only be performed when the number of studies is sufficiently large (k ≥ 10)
  • In R, use the funnel() and metabias() functions
  • In this presentation, we will use the metabin.xlsx dataset

Loading the required dataset

library(readxl)
data.binary <- read_excel("data/metabin.xlsx")
View(data.binary)
Author Ee Ne Ec Nc
Alcorta-Fleischmann 2 279 1 70
Craemer 18 1273 17 1287
Eriksson 6 1858 5 1852
Jones 3 297 6 314
Knauer 0 300 1 295
Kracauer 8 1331 9 1359

Producing a funnel plot

funnel(madata, xlab = "Risk Ratio")

Additional arguments that can be set:

funnel(x, xlim = "", ylim = "", xlab = "", ylab = "", 
            common = "", random = "", 
            cex = , lwd = , pch = , 
            studlab = "", level = , ref = , 
            col = c(), bg = "")
  • xlim and ylim to set limits for x-axis and y-axis, respectively
  • xlab and ylab to set labels for x-axis and y-axis, respectively
  • common and random to plot fixed-effects and random-effects model, respectively
  • studlab to specify whether study labels should be printed (default = FALSE)
  • level to specify the width of confidence intervals used in the plot (default = 0.95)
  • ref to indicate whether reference value (null effect) should be printed (default = TRUE)
  • cex to scale plotting symbols
  • lwd to specify line width
  • pch to specify which symbols to be used for individual studies
  • col to set colors of plotting symbols
  • bg to set background color of plotting symbols

Performing statistical tests for publication bias

# To perform Egger's test
metabias(madata, method.bias = "Egger")
## Linear regression test of funnel plot asymmetry
## 
## Test result: t = 4.68, df = 16, p-value = 0.0003
## 
## Sample estimates:
##    bias se.bias intercept se.intercept
##  4.1111  0.8790   -0.3407       0.1837
## 
## Details:
## - multiplicative residual heterogeneity variance (tau^2 = 1.2014)
## - predictor: standard error
## - weight:    inverse variance
## - reference: Egger et al. (1997), BMJ

# To perform Begg's test
metabias(madata, method.bias = "Begg")
## Rank correlation test of funnel plot asymmetry
## 
## Test result: z = 3.07, p-value = 0.0022
## 
## Sample estimates:
##       ks   se.ks
##  81.0000 26.4008
## 
## - reference: Begg & Mazumdar (1993), Biometrics

Other statistical tests available in R can be accessed by altering the method.bias = "" value. The following values are allowed: Begg, Egger, Thompson, Harbord, Peters, Macaskill, Schwarzer, Deeks

Useful tips :

  • P-value for significance should be set at p < 0.10
  • Egger’s test should be the primary choice in most cases
  • For reviews using OR or SMD, use the Harbord test (method.bias = "Harbord") or Peters test (method.bias = "Peters")
  • For DTA reviews, Deeks’ test (method.bias = "Deeks") can be used, although caution should be taken due to low power to detect small study-effects when there is heterogeneity in the DOR

Combining funnel plot and p-values

funnel(madata, xlab = "Risk Ratio")
legend("topleft", "Egger's p<0.001")

Contour-enhanced funnel plot

A contour-enhanced funnel plot may be a preferable approach to distinguish publication bias from other potential sources of funnel plot asymmetry:

  • Reporting bias (Unpublished non-significant studies)
  • Non-reporting biases
  • Poor methodological quality
  • True heterogeneity
  • Artefactual factors (e.g., OR, SMD)
  • Chance

Producing a contour-enhanced funnel plot

funnel(madata, xlab = "Risk Ratio",
    contour = c(.90,.95,.99),
    col.contour = c("grey25","grey50","grey75"), bg = "white")
### Adjust contour colors by altering the 'col.contour=c()' values

# To create a legend box explaining the significance of each contour
legend("topright", legend=c("0.05 < p \u2264 0.10", 
                            "0.01 < p \u2264 0.05", 
                            "p < 0.01"
                            ),
        fill=c("grey25","grey50","grey75"))

The previous code will result in the following figure:

Performing a trim-and-fill analysis

Trim-and-fill analysis adjusts for unpublished studies when publication bias is present. In R, use the trimfill() function

tf <- trimfill(madata)
# To produce a summary of the trim-and-fill analysis
summary(tf)

# To produce the funnel plot after the trim-and-fill adjustment
funnel(tf)

Risk of bias assessment

We will use the built-in datasets named data_rob2, data_robins_i, and data_quadas from the package robvis.

# Installing and loading the required packages
install.packages("robvis", dependencies = TRUE)
library(robvis)

## Built-in file containing RoB assessments with the Cochrane RoB 2.0 tool
View(data_rob2)

## File containing RoB assessment with the ROBINS-I tool
View(data_robins_i)

## File containing RoB assessment with the QUADAS tool
View(data_quadas)

Only four RoB tools are supported natively in the package: Cochrane RoB, Cochrane RoB 2.0, QUADAS-2, and ROBINS-I. However, you can accommodate other tools using tool = "General".

Two main robvis functions :

  • rob_summary(x, tool = '', weighted = , overall = ) to produce a summary of RoB assessments
    • tool = specifies the RoB tool (ROB2, QUADAS-2, ROBINS-I, General)
    • weighted = indicates whether the RoB summary is weighted by study precision (default = TRUE)
    • overall = indicates whether the overall RoB summary should be printed (default = FALSE)
    • colour = to adjust/change colors
  • rob_traffic_light(x, tool = '', psize = ) to produce item-specific RoB assessment results
    • tool = specifies the RoB tool (ROB2, QUADAS-2, ROBINS-I, General)
    • psize = specifies the point size of the item-specific RoB results
    • colour = to adjust/change colors:
      • Cochrane (default) : colour = "cochrane"
      • Colorblind-friendly : colour = "colourblind"
      • Specific colors : colour = c('')

Note that the colors were picked from hex color codes and specified in order of ascending risk of bias (e.g., ‘Low’ -> ‘Critical’), with the first hex corresponding to “Low” risk of bias.

# To print the summary of RoB assessments
rob_summary(data_rob2, tool = "ROB2", weighted = FALSE, overall = TRUE)

# To print the item-specific results of RoB assessments
rob_traffic_light(data_rob2, tool = "ROB2", psize = 8)

# There are only two predefined color schemes: `cochrane` (default) and
# `colourblind`
rob_summary(data_quadas, tool = "QUADAS-2", colour = "colourblind", weighted = FALSE)

# Alternatively, we can define the color to our liking by using `colour = c()`.
rob_summary(data_quadas, tool = "QUADAS-2", colour = c("#0b03fc", "#ff6600", "#ff0000",
    "#00b034"), weighted = FALSE)

rob_traffic_light(data_robins_i, tool = "ROBINS-I", colour = "cochrane", psize = 8)

Further reading

  1. Cochrane Handbook v.6.1.: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.1 (updated September 2020). Cochrane, 2020. Available from www.training.cochrane.org/handbook.
  2. Harrer M, Cuijpers P, Furukawa TA, Ebert DD. Doing Meta-Analysis in R: A hands-on guide. 2019. doi: 10.5281/zenodo.2551803
  3. Schwarzer G, Carpenter JR, Rücker G. Meta-analysis with R. Switzerland: Springer International Publishing; 2015. link.
  4. Contour-enhanced funnel plot: Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L. Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry. J Clin Epidemiol. 2008 Oct;61(10):991-6. doi: 10.1016/j.jclinepi.2007.11.010.
  5. robvis package manual: McGuinness LA. Introduction to robvis, a visualization tool for risk-of-bias assessments. 2019 Nov 18. Available from: https://cran.r-project.org/web/packages/robvis/vignettes/Introduction_to_robvis.html

Thank you. Questions?

Workshop

Preparing for Workshop

  1. Install and load metadat package into the library
  2. Use dat.anand1999 data from the metadat package
  3. Inspect the dataset

dat.anand1999 is a dataset containing results from 34 trials examining the effectiveness of oral anticoagulants in patients with coronary artery disease.

Perform meta-analysis

  1. Perform a random-effects meta-analysis, visualize the output in a forest plot, and interpret the results
  2. Perform sensitivity analysis by leave-one-out method, and interpret the results
  3. Perform subgroup analysis by anticoagulation intensity, and interpret the results
  4. Perform meta-regression by publication year, and interpret the results
  5. Perform publication bias assessment with funnel plot asymmetry and Egger’s test, and interpret the results

Perform risk of bias visualization

  1. Load the robvis package and the data_robins_i dataset into the environment
  2. Perform risk of bias visualization

Thank you. Questions?